In this example we will examine multivariate forecasting models using mvgam, which fits GAMs using MCMC sampling via the JAGS software (note that JAGS is required; installation links are found here). We begin with a simulation experiment to determine whether mvgam's complexity penalisation works by reducing the number of un-needed dynamic factors. In any factor model, choosing the appropriate number of factors K can be difficult. The approach used by mvgam is to estimate a penalty for each factor that squeezes the factor's variance toward zero, effectively forcing the factor to evolve as a flat white noise process. By allowing each factor's penalty to increase exponentially with the factor index (following Welty, Leah J., et al. Bayesian distributed lag models: estimating effects of particulate matter air pollution on daily mortality. Biometrics 65.1 (2009): 282-291), we hope to guard against specifying too large a K. But we should test this property to see how it behaves in different scenarios. Begin by simulating 8 series that evolve with a shared seasonal pattern and that depend on 2 latent random walk factors. Each series is 100 time steps long, with a seasonal frequency of 12. We give the trend moderate importance by setting trend_rel = 0.6, and we allow each series' observation process to be drawn from slightly different Negative Binomial distributions

set.seed(100)
library(mvgam)
## Loading required package: mgcv
## Loading required package: nlme
## This is mgcv 1.8-33. For overview type 'help("mgcv-package")'.
## Loading required package: rjags
## Loading required package: coda
## Linked to JAGS 4.3.0
## Loaded modules: basemod,bugs
## Loading required package: parallel
## Loading required package: runjags
## Warning: package 'runjags' was built under R version 4.0.5
dat <- sim_mvgam(T = 100, n_series = 8, n_trends = 2,
                 mu_obs = runif(8, 2, 6),
                 size_obs = runif(8, 0.5, 2),
                 trend_rel = 0.6, train_prop = 0.8)
## Warning: replacing previous import 'lifecycle::last_warnings' by
## 'rlang::last_warnings' when loading 'tibble'
## Warning: replacing previous import 'lifecycle::last_warnings' by
## 'rlang::last_warnings' when loading 'pillar'
## Registered S3 method overwritten by 'quantmod':
##   method            from
##   as.zoo.data.frame zoo

Have a look at the series

par(mfrow = c(4, 2))
for (i in 1:8) {
    plot(dat$data_train$y[which(as.numeric(dat$data_train$series) == i)],
         type = "l", ylab = paste("Series", i), xlab = "Time")
}

par(mfrow = c(1, 1))

Clearly there are some correlations in the trends for these series. But how does a dynamic factor process allow us to capture these dependencies? Essentially, a dynamic factor is an unmeasured (latent) random process that induces correlations between time series via a set of factor loadings (\(\beta\)), while also achieving dimension reduction. The loadings represent constant associations between the observed time series and the dynamic factor, but each series can still deviate from the factor through its error process and through its associations with other factors (if we estimate >1 latent factor in our model).
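
In notation, the latent trend for series \(s\) at time \(t\) is a loading-weighted sum of the \(K\) factors (this mirrors the trend[i, s] <- inprod(lv_coefs[s,], LV[i,]) line in the JAGS model files shown later in this tutorial); in this simulation each factor evolves as a random walk:

\[
\text{trend}_{t,s} = \sum_{j=1}^{K} \beta_{s,j}\, z_{t,j}, \qquad z_{t,j} \sim \text{Normal}\!\left(z_{t-1,j},\, \sigma_j^2\right).
\]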

A challenge with any factor model is the need to determine the number of factors K. Setting K too small prevents temporal dependencies from being adequately modelled, leading to poor convergence and difficulty estimating smooth parameters. By contrast, setting K too large leads to unnecessary computation. mvgam approaches this problem by formulating a prior distribution that enforces exponentially increasing penalties on the factor variances, allowing any un-needed factors to evolve as flat lines.
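
In rough notation (mirroring the penalty block of the JAGS model file displayed later in this tutorial, where the precision of factor \(j\)'s innovations is called penalty[j]), the scheme is

\[
d_j = \max(j, \pi), \qquad w_j = e^{\eta_2 d_j}, \qquad
\tau_j = \max\!\left(\epsilon,\; \left[w_j x_{1,j} + (1 - w_j)\,x_2\right] e^{\eta_1 d_j / 2}\right),
\]

where \(x_{1,j}\) and \(x_2\) are half-Normal(0, 1) variables, \(\pi \sim \text{Uniform}(0, K)\) and \(\eta_1, \eta_2 \sim \text{Uniform}(-1, 1)\); a large precision \(\tau_j\) squeezes factor \(j\) toward a flat line. Now let's fit a well-specified model for our simulated series in which we estimate random intercepts, a shared cyclic seasonal smooth and 2 latent dynamic factors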

mod1 <- mvjagam(data_train = dat$data_train,
                data_test = dat$data_test,
                formula = y ~ s(series, bs = "re") +
                    s(season, bs = "cc", k = 12),
                knots = list(season = c(0.5, 12.5)),
                use_lv = TRUE, n_lv = 2, family = "nb",
                trend_model = "RW", chains = 4, burnin = 2000)
## Fitting a multivariate GAM with latent dynamic factors for the trends...
## NOTE: Stopping adaptation

Look at a few plots. The estimated smooth function

plot_mvgam_smooth(object = mod1, series = 1, 
    smooth = "season")

And the true seasonal function in the simulation

plot(dat$global_seasonality[1:12], type = "l")

Check whether each factor was retained using the plot_mvgam_factors function. Here, each factor is tested against a null hypothesis of white noise by calculating the sum of the factor's first derivatives. A factor that contributes more to the series' latent trends will have a larger sum, both because that factor's absolute magnitudes will be larger (due to the weaker penalty on the factor's precision) and because the factor will move around more. By normalising these estimated first-derivative sums, it should be apparent whether some factors have been dropped from the model. Here we see that each factor is contributing to the series' latent trends, and the plots show that neither has been forced to evolve as white noise

plot_mvgam_factors(mod1)
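
To make the diagnostic concrete, below is a minimal sketch of the normalised first-derivative calculation (hypothetical code, not the package's implementation), assuming we already have posterior median estimates for each factor:

# Hypothetical sketch (not mvgam's code): score each factor by the sum of
# absolute first derivatives of its posterior median estimates, then
# normalise so that the contributions sum to one
factor_contributions <- function(lv_medians) {
    # lv_medians: timepoints x factors matrix of posterior median factor values
    deriv_sums <- apply(lv_medians, 2, function(z) sum(abs(diff(z))))
    deriv_sums / sum(deriv_sums)
}

# A factor squeezed to flat white noise barely moves, so its share is small
set.seed(1)
factor_contributions(cbind(cumsum(rnorm(100)), rnorm(100, sd = 0.05)))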

Now we fit the same model but assume that we know nothing about how many factors to use, so we specify the maximum allowed (the total number of series; 8). Note that this model is computationally more expensive, so it will take longer to fit

mod2 <- mvjagam(data_train = dat$data_train,
                data_test = dat$data_test,
                formula = y ~ s(series, bs = "re") +
                    s(season, bs = "cc", k = 12),
                knots = list(season = c(0.5, 12.5)),
                use_lv = TRUE, n_lv = 8, family = "nb",
                trend_model = "RW", chains = 4, burnin = 2000)
## Fitting a multivariate GAM with latent dynamic factors for the trends...
## NOTE: Stopping adaptation

Generate the same plots as for model 1 to see whether this model has also fit the data well

plot_mvgam_smooth(object = mod2, series = 1, 
    smooth = "season")

plot(dat$global_seasonality[1:12], type = "l")

Examining the factor contributions gives us some insight into whether we set n_lv larger than we perhaps needed. These contributions can be interpreted similarly to ordination axes when deciding how many latent variables to specify

plot_mvgam_factors(mod2)

The very weak contributions by some of the factors are a result of the penalisation, which will become more important as the dimensionality of the data grows. Now on to an empirical example. Here we will access monthly search volume data from Google Trends, focusing on the relative importance of search terms related to tick paralysis in Queensland, Australia

library(tidyr)
if (!require(gtrendsR)) {
    install.packages("gtrendsR")
}

terms = c("tick bite", "tick paralysis", 
    "dog tick", "paralysis tick dog")
trends <- gtrendsR::gtrends(terms, geo = "AU-QLD", 
    time = "all", onlyInterest = T)

Google Trends modified their algorithm for extracting search volume data in 2012, so we filter the series to only include observations after this point in time

gtest <- trends$interest_over_time %>%
    tidyr::spread(keyword, hits) %>%
    dplyr::select(-geo, -time, -gprop, -category) %>%
    dplyr::mutate(date = lubridate::ymd(date)) %>%
    dplyr::mutate(year = lubridate::year(date)) %>%
    dplyr::filter(year > 2012) %>%
    dplyr::select(-year)

Convert to an xts object and then to the required mvgam format, holding out the final 10% of observations as the test data

series <- xts::xts(x = gtest[, -1], order.by = gtest$date)
trends_data <- series_to_mvgam(series, freq = 12, train_prop = 0.9)

Plot the series to see how similar their seasonal shapes are over time

plot(series, legend.loc = "topleft")

Now we will fit an mvgam model with shared seasonality and random intercepts per series. Our first attempt will ignore any temporal component in the residuals so that we can identify which GAM predictor combination gives us the best fit, prior to investigating how to deal with any remaining autocorrelation. We assume a Negative Binomial distribution with series-specific overdispersion parameters for the response, and we use a complexity-penalising prior for the overdispersion, which allows the model to reduce toward a simpler Poisson observation process unless the data provide adequate information to support overdispersion. We suppress the global intercept, as it is not needed and would lead to identifiability issues when estimating the series-specific random intercepts. Also note that any smooths using the random effects basis (s(series, bs = "re") below) are automatically re-parameterised to use the non-centred parameterisation, which helps avoid common posterior degeneracies in hierarchical models. This parameterisation tends to work better for most ecological problems, where the data for each group / context are not highly informative, but it is still probably worth investigating whether a centred parameterisation, or even a mix of centred and non-centred, would give better computational performance.
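
For reference, the two parameterisations of a random intercept \(b_s\) with population mean \(\mu\) and standard deviation \(\sigma\) are (standard notation, not mvgam-specific):

\[
\text{centred: } b_s \sim \text{Normal}(\mu, \sigma^2); \qquad
\text{non-centred: } b_s = \mu + \sigma z_s, \quad z_s \sim \text{Normal}(0, 1).
\]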

trends_mod1 <- mvjagam(data_train = trends_data$data_train,
                       data_test = trends_data$data_test,
                       formula = y ~ s(series, bs = "re") +
                           s(season, k = 12, m = 2, bs = "cc") - 1,
                       knots = list(season = c(0.5, 12.5)),
                       trend_model = "None", family = "nb",
                       chains = 4, burnin = 8000)
## NOTE: Stopping adaptation

Given that these series could potentially be following a hierarchical seasonality, we will also trial a slightly more complex model with an extra smoothing term per series that allows each series' seasonal curve to deviate from the global seasonal smooth. Ignore the warning about repeated smooths, as this is not an issue for estimation.

trends_mod2 <- mvjagam(data_train = trends_data$data_train,
                       data_test = trends_data$data_test,
                       formula = y ~ s(season, k = 12, m = 2, bs = "cc") +
                           s(season, series, k = 5, bs = "fs", m = 1),
                       knots = list(season = c(0.5, 12.5)),
                       trend_model = "None", family = "nb",
                       chains = 4, burnin = 8000)
## NOTE: Stopping adaptation

How can we compare these models to ensure we choose one that performs well and provides useful inferences? Beyond posterior retrodictive and predictive checks, we can take advantage of the fact that mvgam fits an mgcv model to provide all the necessary penalty matrices, as well as to identify good initial values for smoothing parameters. Because we did not modify this model by adding a trend component (the only modification was to estimate series-specific overdispersion parameters), we can still employ the usual mgcv model comparison routines

anova(trends_mod1$mgcv_model, trends_mod2$mgcv_model, 
    test = "LRT")
AIC(trends_mod1$mgcv_model, trends_mod2$mgcv_model)
summary(trends_mod1$mgcv_model)
## 
## Family: Negative Binomial(15.727) 
## Link function: log 
## 
## Formula:
## y ~ s(series, bs = "re") + s(season, k = 12, m = 2, bs = "cc") - 
##     1
## 
## Approximate significance of smooth terms:
##             edf Ref.df Chi.sq p-value    
## s(series) 3.999      4  29565  <2e-16 ***
## s(season) 6.735     10  98196  <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## R-sq.(adj) =   0.79   Deviance explained = 80.6%
## -REML = 1308.6  Scale est. = 1         n = 400
summary(trends_mod2$mgcv_model)
## 
## Family: Negative Binomial(24.442) 
## Link function: log 
## 
## Formula:
## y ~ s(season, k = 12, m = 2, bs = "cc") + s(season, series, k = 5, 
##     bs = "fs", m = 1)
## 
## Parametric coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   2.7652     0.4338   6.375 1.83e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Approximate significance of smooth terms:
##                     edf Ref.df Chi.sq p-value    
## s(season)         6.464     10    124  <2e-16 ***
## s(season,series) 13.085     19   1271  <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## R-sq.(adj) =  0.835   Deviance explained = 84.4%
## -REML = 1277.5  Scale est. = 1         n = 400

We can also use JAGS routines for interrogating models. Which model minimises in-sample DIC?

runjags::extract(trends_mod1$jags_model, 
    "dic")
## Mean deviance:  2562 
## penalty 16.72 
## Penalized deviance: 2579
runjags::extract(trends_mod2$jags_model, 
    "dic")
## Mean deviance:  2485 
## penalty 27.45 
## Penalized deviance: 2512

Model 2 seems to fit better so far, suggesting that hierarchical seasonality gives better performance for these series. But a problem with both of the above models is that their forecast uncertainties will not increase into the future, which is not how time series forecasts should behave. Here we fit Model 2 again, but this time specify a time series model for the latent trends. We assume the dynamic trends can be represented using latent factors that each follow an AR1 process, and we rely on the exponential penalties to regularise any un-needed factors by setting n_lv = 4.
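
Under this specification, each latent factor \(z_{t,j}\) evolves as (matching the LV updates in the JAGS model file shown below, where the drift \(\phi_j\) and the higher-order AR coefficients are fixed at zero when trend_model = "AR1")

\[
z_{t,j} \sim \text{Normal}\!\left(\phi_j + \text{ar1}_j\, z_{t-1,j},\; \tau_j^{-1}\right), \qquad j = 1, \ldots, 4.
\]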

trends_mod3 <- mvjagam(data_train = trends_data$data_train,
                       data_test = trends_data$data_test,
                       formula = y ~ s(season, k = 12, m = 2, bs = "cc") +
                           s(season, series, k = 5, bs = "fs", m = 1),
                       knots = list(season = c(0.5, 12.5)),
                       trend_model = "AR1", use_lv = TRUE,
                       n_lv = 4, family = "nb", chains = 4,
                       burnin = 8000)
## Fitting a multivariate GAM with latent dynamic factors for the trends...
## NOTE: Stopping adaptation

Have a look at the returned JAGS model file to see how the dynamic factors are incorporated. Notice that the factor precisions are penalised using the exponentially increasing penalty scheme described above, and that each factor's set of loadings is constrained (zero upper triangle, positive diagonal) to aid identifiability

trends_mod3$model_file
##   [1] "model {"                                                                                             
##   [2] ""                                                                                                    
##   [3] "        ## GAM linear predictor"                                                                     
##   [4] "        eta <- X %*% b"                                                                              
##   [5] ""                                                                                                    
##   [6] "        ## Mean expectations"                                                                        
##   [7] "        for (i in 1:n) {"                                                                            
##   [8] "        for (s in 1:n_series) {"                                                                     
##   [9] "        mu[i, s] <- exp(eta[ytimes[i, s]] + trend[i, s])"                                            
##  [10] "        }"                                                                                           
##  [11] "        }"                                                                                           
##  [12] ""                                                                                                    
##  [13] "        ## Latent factors evolve as time series with penalised precisions;"                          
##  [14] "        ## the penalty terms force any un-needed factors to evolve as flat lines"                    
##  [15] "        for(j in 1:n_lv){"                                                                           
##  [16] "         LV[1, j] ~ dnorm(0, penalty[j])"                                                            
##  [17] "        }"                                                                                           
##  [18] ""                                                                                                    
##  [19] "        for(j in 1:n_lv){"                                                                           
##  [20] "         LV[2, j] ~ dnorm(phi[j] + ar1[j]*LV[1, j], penalty[j])"                                     
##  [21] "        }"                                                                                           
##  [22] ""                                                                                                    
##  [23] "        for(j in 1:n_lv){"                                                                           
##  [24] "         LV[3, j] ~ dnorm(phi[j] + ar1[j]*LV[2, j] + ar2[j]*LV[1, j], penalty[j])"                   
##  [25] "        }"                                                                                           
##  [26] ""                                                                                                    
##  [27] "        for(i in 4:n){"                                                                              
##  [28] "         for(j in 1:n_lv){"                                                                          
##  [29] "          LV[i, j] ~ dnorm(phi[j] + ar1[j]*LV[i - 1, j] +"                                           
##  [30] "                          ar2[j]*LV[i - 2, j] + ar3[j]*LV[i - 3, j], penalty[j])"                    
##  [31] "         }"                                                                                          
##  [32] "        }"                                                                                           
##  [33] ""                                                                                                    
##  [34] "        ## AR components"                                                                            
##  [35] "        for (s in 1:n_lv){"                                                                          
##  [36] "        phi[s] <- 0"                                                                                 
##  [37] "        ar1[s] ~ dnorm(0, 10)"                                                                       
##  [38] "        ar2[s] <- 0"                                                                                 
##  [39] "        ar3[s] <- 0"                                                                                 
##  [40] "        }"                                                                                           
##  [41] ""                                                                                                    
##  [42] "        ## Shrinkage penalties for each factor squeeze the factor to a flat line and squeeze"        
##  [43] "        ## the entire factor toward a flat white noise process if supported by"                      
##  [44] "        ## the data. The prior for individual factor penalties allows each factor to possibly"       
##  [45] "        ## have a relatively large penalty, which shrinks the prior for that factor's variance"      
##  [46] "        ## substantially. Penalties increase exponentially with the number of factors following"     
##  [47] "        ## Welty, Leah J., et al. Bayesian distributed lag models: estimating effects of particulate"
##  [48] "        ## matter air pollution on daily mortality Biometrics 65.1 (2009): 282-291."                 
##  [49] "        pi ~ dunif(0, n_lv)"                                                                         
##  [50] "        X2 ~ dnorm(0, 1)T(0, )"                                                                      
##  [51] ""                                                                                                    
##  [52] "        # eta1 controls the baseline penalty"                                                        
##  [53] "        eta1 ~ dunif(-1, 1)"                                                                         
##  [54] ""                                                                                                    
##  [55] "        # eta2 controls how quickly the penalties exponentially increase"                            
##  [56] "        eta2 ~ dunif(-1, 1)"                                                                         
##  [57] ""                                                                                                    
##  [58] "        for(t in 1:n_lv){"                                                                           
##  [59] "         X1[t] ~ dnorm(0, 1)T(0, )"                                                                  
##  [60] "         l.dist[t] <- max(t, pi[])"                                                                  
##  [61] "         l.weight[t] <- exp(eta2[] * l.dist[t])"                                                     
##  [62] "         l.var[t] <- exp(eta1[] * l.dist[t] / 2) * 1"                                                
##  [63] "         theta.prime[t] <- l.weight[t] * X1[t] + (1 - l.weight[t]) * X2[]"                           
##  [64] "         penalty[t] <- max(min_eps, theta.prime[t] * l.var[t])"                                      
##  [65] "        }"                                                                                           
##  [66] ""                                                                                                    
##  [67] "        ## Latent factor loadings: standard normal with identifiability constraints"                 
##  [68] "        ## Upper triangle of loading matrix set to zero"                                             
##  [69] "        for(j in 1:(n_lv - 1)){"                                                                     
##  [70] "          for(j2 in (j + 1):n_lv){"                                                                  
##  [71] "           lv_coefs[j, j2] <- 0"                                                                     
##  [72] "          }"                                                                                         
##  [73] "         }"                                                                                          
##  [74] ""                                                                                                    
##  [75] "        ## Positive constraints on loading diagonals"                                                
##  [76] "        for(j in 1:n_lv) {"                                                                          
##  [77] "         lv_coefs[j, j] ~ dnorm(0, 1)T(0, 1);"                                                       
##  [78] "        }"                                                                                           
##  [79] ""                                                                                                    
##  [80] "        ## Lower diagonal free"                                                                      
##  [81] "        for(j in 2:n_lv){"                                                                           
##  [82] "         for(j2 in 1:(j - 1)){"                                                                      
##  [83] "          lv_coefs[j, j2] ~ dnorm(0, 1)T(-1, 1);"                                                    
##  [84] "         }"                                                                                          
##  [85] "       }"                                                                                            
##  [86] ""                                                                                                    
##  [87] "        ## Other elements also free"                                                                 
##  [88] "        for(j in (n_lv + 1):n_series) {"                                                             
##  [89] "         for(j2 in 1:n_lv){"                                                                         
##  [90] "          lv_coefs[j, j2] ~ dnorm(0, 1)T(-1, 1);"                                                    
##  [91] "         }"                                                                                          
##  [92] "        }"                                                                                           
##  [93] ""                                                                                                    
##  [94] "        ## Trend evolution for the series depends on latent factors"                                 
##  [95] "        for (i in 1:n){"                                                                             
##  [96] "        for (s in 1:n_series){"                                                                      
##  [97] "         trend[i, s] <- inprod(lv_coefs[s,], LV[i,])"                                                
##  [98] "        }"                                                                                           
##  [99] "        }"                                                                                           
## [100] ""                                                                                                    
## [101] "        ## Negative binomial likelihood functions"                                                   
## [102] "        for (i in 1:n) {"                                                                            
## [103] "        for (s in 1:n_series) {"                                                                     
## [104] "  y[i, s] ~ dnegbin(rate[i, s], r[s])"                                                               
## [105] "        rate[i, s] <- ifelse((r[s] / (r[s] + mu[i, s])) < min_eps, min_eps,"                         
## [106] "                            (r[s] / (r[s] + mu[i, s])))"                                             
## [107] "        }"                                                                                           
## [108] "        }"                                                                                           
## [109] ""                                                                                                    
## [110] "        ## Complexity penalising prior for the overdispersion parameter;"                            
## [111] "        ## where the likelihood reduces to a 'base' model (Poisson) unless"                          
## [112] "        ## the data support overdispersion"                                                          
## [113] "        for(s in 1:n_series){"                                                                       
## [114] "         r[s] <- pow(r_raw[s], 2)"                                                                   
## [115] "         r_raw[s] ~ dexp(0.05)"                                                                      
## [116] "        }"                                                                                           
## [117] ""                                                                                                    
## [118] "        ## Posterior predictions"                                                                    
## [119] "        for (i in 1:n) {"                                                                            
## [120] "        for (s in 1:n_series) {"                                                                     
## [121] "  ypred[i, s] ~ dnegbin(rate[i, s], r[s])"                                                           
## [122] "        }"                                                                                           
## [123] "        }"                                                                                           
## [124] "        "                                                                                            
## [125] "  ## parametric effect priors (regularised for identifiability)"                                     
## [126] "  for (i in 1:1) { b[i] ~ dnorm(p_coefs[i], p_taus[i]) }"                                            
## [127] "  ## prior for s(season)... "                                                                        
## [128] "  K1 <- S1[1:10,1:10] * lambda[1] "                                                                  
## [129] "  b[2:11] ~ dmnorm(zero[2:11],K1) "                                                                  
## [130] "  ## prior for s(season,series)... "                                                                 
## [131] "  for (i in c(12:15,17:20,22:25,27:30)) { b[i] ~ dnorm(0, lambda[2]) }"                              
## [132] "  for (i in c(16,21,26,31)) { b[i] ~ dnorm(0, lambda[3]) }"                                          
## [133] "   ## smoothing parameter priors..."                                                                 
## [134] "  for (i in 1:3) {"                                                                                  
## [135] "   lambda[i] ~ dexp(1/sp[i])"                                                                        
## [136] "    rho[i] <- log(lambda[i])"                                                                        
## [137] "  }"                                                                                                 
## [138] "}"

Inspection of the dynamic factors and their relative contributions indicates that the first factor is by far the most important

plot_mvgam_factors(trends_mod3)

How do forecasts for this model compare to the previous one that did not include any trend component?

compare_mvgams(trends_mod2, trends_mod3, 
    fc_horizon = 6, n_cores = 3, n_evaluations = 20)
## DRPS summaries per model (lower is better)
##             Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
## Model 1 453.0835 469.1107 487.3614 482.3549 495.7036 504.8115
## Model 2 222.3283 246.9252 260.6688 259.2827 276.0981 288.4454
## 
## 90% interval coverages per model (closer to 0.9 is better)
## Model 1 1 
## Model 2 0.9333333

Model 3 (with the dynamic trend) provides far better forecasts than relying only on the estimated smooths. Inspect the model summary (note again that the p-value approximations are a work in progress here and so may not be totally reliable).

summary_mvgam(trends_mod3)
## GAM formula:
## y ~ s(season, k = 12, m = 2, bs = "cc") + s(season, series, k = 5, 
##     bs = "fs", m = 1)
## 
## Family:
## Negative Binomial
## 
## N latent factors:
## 4
## 
## N series:
## 4
## 
## N observations per series:
## 100
## 
## Dispersion parameter estimates:
##          2.5%      50%    97.5% Rhat n.eff
## r[1] 50.24281 529.6039 6898.700 1.01  4562
## r[2] 41.52333 522.1178 6778.218 1.00  5587
## r[3] 17.32578 165.8501 5027.514 1.01  2602
## r[4] 65.89264 614.2587 7437.362 1.00  4813
## 
## GAM smooth term approximate significances:
##                     edf Ref.df Chi.sq p-value    
## s(season)         7.371 10.000  170.3  <2e-16 ***
## s(season,series) 10.489 20.000 1060.6  <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## GAM coefficient (beta) estimates:
##                            2.5%          50%       97.5% Rhat n.eff
## (Intercept)          1.86907001  2.979839364  4.10063925 1.84    11
## s(season).1         -0.48213694 -0.242738591  0.02911646 1.35    68
## s(season).2         -0.84482661 -0.607908026 -0.36730591 1.11    88
## s(season).3         -0.85784320 -0.573808613 -0.28478389 1.02    75
## s(season).4         -0.60586317 -0.337862904 -0.05833544 1.02    72
## s(season).5         -0.49418453 -0.222215222  0.08819482 1.17    70
## s(season).6         -0.32010700  0.023035117  0.36486561 1.20    62
## s(season).7          0.04207088  0.436779007  0.79801513 1.25    51
## s(season).8          0.45516270  0.787892709  1.11590328 1.20    60
## s(season).9          0.45117473  0.672135745  0.88919383 1.04    72
## s(season).10         0.01092728  0.247756134  0.47644112 1.02    64
## s(season,series).1  -0.07729531  0.140109893  0.35368812 1.04   106
## s(season,series).2  -0.41337953 -0.171165585  0.06045261 1.01   163
## s(season,series).3  -0.35074618 -0.125097582  0.11690366 1.21    46
## s(season,series).4   0.01000743  0.135214602  0.25495306 1.13    81
## s(season,series).5  -0.69687711  0.398467429  1.50217593 1.94    14
## s(season,series).6  -0.39264293 -0.132767694  0.11854240 1.02   260
## s(season,series).7  -0.36835619 -0.078184927  0.20645131 1.00   396
## s(season,series).8  -0.29586239 -0.052178089  0.20277396 1.14   108
## s(season,series).9  -0.14217236 -0.006464029  0.12435206 1.09   117
## s(season,series).10 -2.51288311 -1.392867089 -0.29468696 1.91    18
## s(season,series).11 -0.13746789  0.094873856  0.32278629 1.04   132
## s(season,series).12 -0.09309094  0.158419041  0.41119403 1.01   208
## s(season,series).13 -0.07675587  0.155017092  0.40238454 1.19    64
## s(season,series).14 -0.05416828  0.075193716  0.19613050 1.11    78
## s(season,series).15 -1.41032710 -0.295883645  0.81622292 1.93    16
## s(season,series).16 -0.36108693 -0.135165829  0.07679137 1.05   111
## s(season,series).17 -0.23393558  0.008478669  0.25142944 1.01   153
## s(season,series).18 -0.27202625 -0.039340886  0.19628062 1.21    54
## s(season,series).19 -0.14949337 -0.023390690  0.09676790 1.12    84
## s(season,series).20 -0.67106912  0.430904299  1.51240322 1.95    16
## 
## GAM smoothing parameter (rho) estimates:
##                        2.5%       50%      97.5% Rhat n.eff
## s(season)          4.940568  6.103674  7.0190841 1.01   649
## s(season,series)   2.045810  2.849539  3.5005161 1.00  2147
## s(season,series)2 -3.450028 -1.980948 -0.9952648 1.00  7793
## 
## Latent trend drift (phi) and AR parameter estimates:
##               2.5%        50%     97.5% Rhat n.eff
## phi[1]  0.00000000 0.00000000 0.0000000  NaN     0
## phi[2]  0.00000000 0.00000000 0.0000000  NaN     0
## phi[3]  0.00000000 0.00000000 0.0000000  NaN     0
## phi[4]  0.00000000 0.00000000 0.0000000  NaN     0
## ar1[1] -0.26560937 0.05044540 0.3546245 1.00  4618
## ar1[2]  0.05932782 0.69641363 1.0005989 1.03   135
## ar1[3] -0.36148165 0.09039014 0.6086111 1.01  1120
## ar1[4] -0.53237798 0.06119639 0.8249580 1.01   658
## ar2[1]  0.00000000 0.00000000 0.0000000  NaN     0
## ar2[2]  0.00000000 0.00000000 0.0000000  NaN     0
## ar2[3]  0.00000000 0.00000000 0.0000000  NaN     0
## ar2[4]  0.00000000 0.00000000 0.0000000  NaN     0
## ar3[1]  0.00000000 0.00000000 0.0000000  NaN     0
## ar3[2]  0.00000000 0.00000000 0.0000000  NaN     0
## ar3[3]  0.00000000 0.00000000 0.0000000  NaN     0
## ar3[4]  0.00000000 0.00000000 0.0000000  NaN     0
## 

Look at Dunn-Smyth residuals for some series from this preferred model to ensure that our dynamic factor process has captured most of the temporal dependencies in the observations
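
As a reminder of what these residuals are, below is a minimal sketch for a single Negative Binomial observation (hypothetical code, not mvgam's internals): the discrete observation is jittered between the CDF evaluated just below and at the observed count, then mapped to the standard normal scale.

# Dunn-Smyth (randomised quantile) residual sketch; mu and size are assumed
# posterior estimates of the Negative Binomial mean and dispersion
ds_residual <- function(y, mu, size) {
    a <- pnbinom(y - 1, mu = mu, size = size)  # zero when y == 0
    b <- pnbinom(y, mu = mu, size = size)
    qnorm(runif(1, min = a, max = b))
}
set.seed(1)
ds_residual(y = 3, mu = 5, size = 2)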

plot_mvgam_resids(trends_mod3, 1)

plot_mvgam_resids(trends_mod3, 2)

plot_mvgam_resids(trends_mod3, 3)

plot_mvgam_resids(trends_mod3, 4)

Perform posterior predictive checks to see whether the model can simulate data that look realistic and unbiased, by comparing kernel densities of simulated posterior predictions (yhat) to the density of the observations (y). This is particularly useful for examining whether the Negative Binomial observation model produces realistic-looking simulations for each individual series.

plot_mvgam_ppc(trends_mod3, series = 1, type = "density")

plot_mvgam_ppc(trends_mod3, series = 2, type = "density")

plot_mvgam_ppc(trends_mod3, series = 3, type = "density")

plot_mvgam_ppc(trends_mod3, series = 4, type = "density")
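
For intuition about what these overlays represent, the same kind of check can be sketched by hand with stand-in data (hypothetical values, not draws from trends_mod3):

# Density PPC sketch with simulated stand-ins for the observations and the
# posterior predictive draws
set.seed(1)
y_obs <- rnbinom(100, mu = 5, size = 2)                     # 'observed' series
ypred <- t(replicate(200, rnbinom(100, mu = 5, size = 2)))  # 'posterior' draws
plot(density(y_obs), lwd = 2, main = "Density PPC (sketch)")
for (d in sample(nrow(ypred), 50)) {
    lines(density(ypred[d, ]), col = grDevices::adjustcolor("grey40", 0.3))
}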

Look at traceplots for the smoothing parameters (rho)

plot_mvgam_trace(object = trends_mod3, param = "rho")

Plot posterior predictive distributions for the training and testing periods for each series

plot_mvgam_fc(object = trends_mod3, series = 1, 
    data_test = trends_data$data_test)

plot_mvgam_fc(object = trends_mod3, series = 2, 
    data_test = trends_data$data_test)

plot_mvgam_fc(object = trends_mod3, series = 3, 
    data_test = trends_data$data_test)

plot_mvgam_fc(object = trends_mod3, series = 4, 
    data_test = trends_data$data_test)

Plot posterior distributions for the latent trend estimates, again for the training and testing periods

plot_mvgam_trend(object = trends_mod3, series = 1, 
    data_test = trends_data$data_test)

plot_mvgam_trend(object = trends_mod3, series = 2, 
    data_test = trends_data$data_test)

plot_mvgam_trend(object = trends_mod3, series = 3, 
    data_test = trends_data$data_test)

plot_mvgam_trend(object = trends_mod3, series = 4, 
    data_test = trends_data$data_test)

Given that we fit a model with hierarchical seasonality, the seasonal smooths are able to deviate from one another (though they share the same wiggliness and all deviate from a common 'global' seasonal function)

plot_mvgam_smooth(object = trends_mod3, series = 1, 
    smooth = "season")

plot_mvgam_smooth(object = trends_mod3, series = 2, 
    smooth = "season")

plot_mvgam_smooth(object = trends_mod3, series = 3, 
    smooth = "season")

plot_mvgam_smooth(object = trends_mod3, series = 4, 
    smooth = "season")

Plot posterior mean estimates of the latent trend correlations. These correlations are more useful than inspecting the latent factor loadings directly (for example, as an ordination), because the ordering of the loadings (although constrained for identifiability purposes) can vary from chain to chain.
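
Conceptually, something like the following underlies these estimates (hypothetical code and dimensions, not lv_correlations' implementation): correlate the series' trends within each posterior draw, then average the correlation matrices elementwise.

# Hypothetical sketch with a stand-in draws x timepoints x series array
set.seed(2)
n_draws <- 100; n_time <- 80; n_series <- 4
trend_draws <- array(rnorm(n_draws * n_time * n_series),
                     dim = c(n_draws, n_time, n_series))
cor_by_draw <- apply(trend_draws, 1, function(tr) cor(tr))
mean_cor <- matrix(rowMeans(cor_by_draw), n_series, n_series)

With the fitted model we simply call lv_correlations and plot the result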

correlations <- lv_correlations(object = trends_mod3)
library(ggplot2)
mean_correlations <- correlations$mean_correlations
mean_correlations[upper.tri(mean_correlations)] <- NA
mean_correlations <- data.frame(mean_correlations)

mean_correlations %>%
    tibble::rownames_to_column("series1") %>%
    tidyr::pivot_longer(-series1, names_to = "series2",
                        values_to = "Correlation") %>%
    ggplot(aes(x = series1, y = series2)) +
    geom_tile(aes(fill = Correlation)) +
    scale_fill_gradient2(low = "darkred", mid = "white", high = "darkblue",
                         midpoint = 0, breaks = seq(-1, 1, length.out = 5),
                         limits = c(-1, 1), name = "Trend\ncorrelation") +
    labs(x = "", y = "") +
    theme_dark() +
    theme(axis.text.x = element_text(angle = 45, hjust = 1))

There is certainly some evidence of positive trend correlations for a few of these search terms, which is not surprising given how similar some of them are and how closely linked they should all be to interest in tick paralysis in Queensland. Plot some STL decompositions of these series to see whether these trends are noticeable in the raw data

plot(stl(ts(as.vector(series$`tick paralysis`), frequency = 12), "periodic"))

plot(stl(ts(as.vector(series$`paralysis tick dog`), frequency = 12), "periodic"))

plot(stl(ts(as.vector(series$`dog tick`), frequency = 12), "periodic"))

plot(stl(ts(as.vector(series$`tick bite`), frequency = 12), "periodic"))

Forecast period posterior predictive checks suggest that the model still has room for improvement:

plot_mvgam_ppc(trends_mod3, series = 1, type = "density", 
    data_test = trends_data$data_test)

plot_mvgam_ppc(trends_mod3, series = 1, type = "mean", 
    data_test = trends_data$data_test)

plot_mvgam_ppc(trends_mod3, series = 2, type = "density", 
    data_test = trends_data$data_test)

plot_mvgam_ppc(trends_mod3, series = 2, type = "mean", 
    data_test = trends_data$data_test)

plot_mvgam_ppc(trends_mod3, series = 3, type = "density", 
    data_test = trends_data$data_test)

plot_mvgam_ppc(trends_mod3, series = 3, type = "mean", 
    data_test = trends_data$data_test)

plot_mvgam_ppc(trends_mod3, series = 4, type = "density", 
    data_test = trends_data$data_test)

plot_mvgam_ppc(trends_mod3, series = 4, type = "mean", 
    data_test = trends_data$data_test)

Other next steps could involve devising a more goal-specific set of posterior predictive checks (see this paper by Gelman et al. and relevant works by Betancourt for examples) and comparing out-of-sample Discrete Rank Probability Scores for this model against versions with other latent trend specifications (i.e. AR2, AR3, random walk)
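
For reference, below is a minimal sketch of a Discrete Rank Probability Score for a single count observation, computed from posterior predictive draws (hypothetical code, not mvgam's scoring implementation):

# DRPS sketch: squared distance between the empirical forecast CDF and the
# step function at the realised count, summed over a (truncated) grid
drps <- function(draws, y, upper = max(c(draws, y)) * 2) {
    ks <- 0:upper
    Fhat <- ecdf(draws)(ks)  # forecast CDF at each cutpoint
    sum((Fhat - as.numeric(ks >= y))^2)
}
set.seed(3)
drps(draws = rnbinom(1000, mu = 5, size = 2), y = 4)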